Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Abstract: Recent advancements in neurotechnology enable precise spatiotemporal patterns of microstimulations with single-cell resolution. The choice of perturbation sites must satisfy two key criteria: efficacy in evoking significant responses and selectivity for the desired target effects. This choice is currently based on laborious trial-and-error procedures, which are unfeasible for sequences of multi-site stimulations. Efficient methods to design complex perturbation patterns are urgently needed. Can we design a spatiotemporal pattern of stimulation to steer neural activity and behavior towards a desired target? We outline a method for achieving this goal in two steps. First, we identify the most effective perturbation sites, or hubs, based only on short observations of spontaneous neural activity. Second, we provide an efficient method to design multi-site stimulation patterns by combining approaches from nonlinear dynamical systems, control theory, and data-driven methods. We demonstrate the feasibility of our approach using multi-site stimulation patterns in recurrent network models. Free, publicly-accessible full text available May 30, 2026.
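The abstract above does not spell out the hub-identification algorithm, so the sketch below is only a minimal, hypothetical illustration of the first step: it ranks units of a toy random recurrent rate network by a covariance-based centrality computed from spontaneous activity, then checks how strongly stimulating a high-ranked versus a low-ranked unit drives the network. The network model, the hub score, and all parameter values are assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 100, 20000, 0.1, 1.0

# Toy random recurrent rate network (hypothetical stand-in for the models in the paper)
J = rng.normal(0.0, 1.2 / np.sqrt(N), (N, N))

def simulate(stim=None, steps=T):
    """Euler integration of tau * dx/dt = -x + J @ tanh(x) + noise + stim."""
    x = np.zeros(N)
    trace = np.empty((steps, N))
    for t in range(steps):
        drive = J @ np.tanh(x) + 0.3 * rng.normal(size=N)
        if stim is not None:
            drive = drive + stim
        x += dt / tau * (-x + drive)
        trace[t] = x
    return trace

# Step 1: rank candidate sites by a covariance-based "hub score" from spontaneous activity
spont = simulate()
hub_score = np.abs(np.cov(spont.T)).sum(axis=0)   # crude centrality proxy (an assumption)
ranked = np.argsort(hub_score)[::-1]

# Step 2 (sanity check): compare network responses evoked by stimulating different sites
def evoked_norm(site, amp=2.0, steps=500):
    stim = np.zeros(N)
    stim[site] = amp
    return np.linalg.norm(simulate(stim=stim, steps=steps).mean(axis=0))

print("response to stimulating the top-ranked hub:   ", evoked_norm(ranked[0]))
print("response to stimulating the bottom-ranked site:", evoked_norm(ranked[-1]))
```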
- Free, publicly-accessible full text available June 1, 2026.
- Abstract: Achieving targeted perturbations of neural activity is essential for dissecting the causal architecture of brain circuits. A crucial challenge in targeted manipulation experiments is the identification of high-efficacy perturbation sites whose stimulation exerts desired effects, currently done with costly trial-and-error procedures. Can one predict stimulation effects solely based on observations of the circuit activity, in the absence of perturbation? We answer this question in dissociated neuronal cultures on High-Density Microelectrode Arrays (HD-MEAs), which, compared to in vivo preparations, offer a controllable in vitro platform that enables precise stimulation and full access to network dynamics. We first reconstruct the perturbome (the full map of network responses to focal electrical stimulation) by sequentially activating individual sites and quantifying their network-wide effects. The measured perturbome patterns cluster into functional modules, with limited spread across clusters. We then demonstrate that the perturbome can be predicted from spontaneous activity alone. Using short baseline recordings in the absence of perturbations, we estimate Effective Connectivity (EC) and show that it predicts the spatial organization of the perturbome, including spatial clusters and local connectivity. Our results demonstrate that spontaneous dynamics encode the latent causal structure of neural circuits and that EC metrics can serve as effective, model-free proxies for stimulation outcomes. This framework enables data-driven targeting and causal inference in vitro, with potential applications to more complex preparations such as human iPSC-derived neurons and brain organoids, and with implications for both basic research and therapeutic strategies targeting neurological disorders. Significance Statement: Neuronal cultures are increasingly used as controllable platforms to study neuronal network dynamics, neuromodulation, and brain-inspired therapies. To fully exploit their potential, we need robust methods to probe and interpret causal interactions. Here, we develop a framework to reconstruct the perturbome, the network-wide map of responses to localized electrical stimulation, and show that it can be predicted from spontaneous activity alone. Using simple, model-free metrics of Effective Connectivity, we reveal that ongoing activity encodes causal structure and provides reliable proxies for stimulation outcomes. This validates EC as a practical measure of causal influence in vitro. Our methodology refines the use of neuronal cultures for brain-on-a-chip approaches and paves the way for data-driven neuromodulation strategies in human stem cell–derived neurons and brain organoids. Free, publicly-accessible full text available May 4, 2026.
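As a rough illustration of the idea of predicting causal structure from spontaneous activity (not the authors' pipeline), the sketch below generates a surrogate spike raster from a known random coupling matrix, computes a simple lagged cross-covariance as a model-free effective-connectivity proxy, and checks how well that proxy ranks the ground-truth couplings. The raster generator, the specific EC metric, and all parameters are hypothetical stand-ins for the HD-MEA data and the metrics used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_units, n_bins, lag = 60, 50000, 1

# Surrogate "spontaneous activity": a binary raster generated from a known random
# coupling matrix (a hypothetical stand-in for a real HD-MEA recording).
W_true = (rng.random((n_units, n_units)) < 0.05) * rng.uniform(0.5, 1.5, (n_units, n_units))
np.fill_diagonal(W_true, 0.0)

raster = np.zeros((n_bins, n_units), dtype=np.int8)
base_rate = 0.02
for t in range(1, n_bins):
    p_spike = base_rate + 0.05 * (raster[t - 1] @ W_true)
    raster[t] = rng.random(n_units) < np.clip(p_spike, 0.0, 1.0)

# Model-free EC proxy: lag-1 cross-covariance between every pair of units
X = raster[:-lag].astype(float)
Y = raster[lag:].astype(float)
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)
ec = (X.T @ Y) / n_bins          # ec[i, j]: putative influence of unit i on unit j

# Does this EC proxy recover the latent causal couplings used to generate the data?
off_diag = ~np.eye(n_units, dtype=bool)
rho, _ = spearmanr(ec[off_diag], W_true[off_diag])
print(f"rank correlation between EC proxy and ground-truth couplings: {rho:.2f}")
```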
- Landmark universal function approximation results for neural networks with trained weights and biases provided the impetus for the ubiquitous use of neural networks as learning models in neuroscience and Artificial Intelligence (AI). Recent work has extended these results to networks in which a smaller subset of weights (e.g., output weights) are tuned, leaving other parameters random. However, it remains an open question whether universal approximation holds when only biases are learned, despite evidence from neuroscience and AI that biases significantly shape neural responses. The current paper answers this question. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets. We further show an analogous result for the approximation of dynamical systems with recurrent neural networks. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on recent fine-tuning methods for large language models, like bias and prefix-based approaches. Free, publicly-accessible full text available January 22, 2026.
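A small numerical illustration of the bias-only setting described above: a feedforward network with fixed random input and output weights is fit to a continuous target function by gradient descent on the hidden biases alone. The architecture, target function, and training parameters are arbitrary choices for illustration; the paper's results are theoretical and do not prescribe this particular setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n_hidden, n_steps, lr = 500, 5000, 0.05

# Fixed random weights; only the hidden biases b are ever updated.
W = rng.normal(0.0, 2.0, (n_hidden, 1))                 # input weights (frozen)
a = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), n_hidden)  # output weights (frozen)
b = np.zeros(n_hidden)                                   # the only trainable parameters

x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
target = np.sin(3.0 * x[:, 0])                           # an arbitrary continuous target

for _ in range(n_steps):
    h = np.tanh(x @ W.T + b)                             # hidden activations, shape (200, n_hidden)
    err = h @ a - target
    # Mean-squared-error gradient with respect to the biases only
    grad_b = 2.0 * ((err[:, None] * (1.0 - h ** 2)) * a).mean(axis=0)
    b -= lr * grad_b

print("final MSE with bias-only training:", float(np.mean((np.tanh(x @ W.T + b) @ a - target) ** 2)))
```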
- Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and the extent of stimulus-induced variability quenching, which we observed in the empirical data. Our results elucidate computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggest a role for phase transitions in explaining nonlinear modulations of cortical computations.
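The published model is a spiking network with a clustered architecture and metastable dynamics; the toy rate-network sketch below only illustrates the ingredients named in the abstract, namely a clustered coupling matrix, an arousal-like baseline input, and a crude discriminability readout across arousal levels. It is not expected to reproduce the inverted-U result, and every parameter value is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, per_cluster = 4, 20
N = n_clusters * per_cluster
labels = np.repeat(np.arange(n_clusters), per_cluster)

# Clustered coupling matrix: stronger within clusters than between (toy values)
J = np.where(labels[:, None] == labels[None, :], 0.35, -0.05) / per_cluster
np.fill_diagonal(J, 0.0)

def trial(arousal, stim_cluster, steps=300, dt=0.1):
    """One stochastic trial; 'arousal' is a uniform baseline current to all units."""
    x = rng.normal(0.0, 0.1, N)
    stim = 0.4 * (labels == stim_cluster)
    for _ in range(steps):
        drive = J @ np.tanh(x) + arousal + stim + 0.5 * rng.normal(size=N)
        x += dt * (-x + drive)
    return np.tanh(x)

def discriminability(arousal, n_trials=40):
    """Crude d'-like score: distance between stimulus-conditioned means over pooled std."""
    r0 = np.array([trial(arousal, 0) for _ in range(n_trials)])
    r1 = np.array([trial(arousal, 1) for _ in range(n_trials)])
    signal = np.linalg.norm(r0.mean(axis=0) - r1.mean(axis=0))
    noise = 0.5 * (r0.std(axis=0).mean() + r1.std(axis=0).mean()) + 1e-9
    return signal / noise

for arousal in (0.0, 0.3, 0.6, 1.2):
    print(f"arousal={arousal:.1f}  discriminability={discriminability(arousal):.2f}")
```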
- Changes in behavioral state, such as arousal and movements, strongly affect neural activity in sensory areas, and can be modeled as long-range projections regulating the mean and variance of baseline input currents. What are the computational benefits of these baseline modulations? We investigate this question within a brain-inspired framework for reservoir computing, where we vary the quenched baseline inputs to a recurrent neural network with random couplings. We found that baseline modulations control the dynamical phase of the reservoir network, unlocking a vast repertoire of network phases. We uncovered a number of bistable phases exhibiting the simultaneous coexistence of fixed points and chaos, of two fixed points, and of weak and strong chaos. We identified several phenomena, including noise-driven enhancement of chaos, ergodicity breaking, and neural hysteresis, whereby transitions across a phase boundary retain the memory of the preceding phase. In each bistable phase, the reservoir performs a different binary decision-making task. Fast switching between different tasks can be controlled by adjusting the baseline input mean and variance. Moreover, we found that the reservoir network achieves optimal memory performance at any first-order phase boundary. In summary, baseline control enables multitasking without any optimization of the network couplings, opening directions for brain-inspired artificial intelligence and providing an interpretation for the ubiquitously observed behavioral modulations of cortical activity.
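As a minimal illustration of the baseline-control knob described above (and not the paper's analysis), the sketch below builds a random reservoir with fixed couplings, draws a quenched baseline input with a given mean and standard deviation, and estimates a finite-time divergence rate of nearby trajectories as a crude indicator of the dynamical phase. The divergence metric and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, g, steps, dt = 200, 1.8, 2000, 0.1

J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # fixed random reservoir couplings

def divergence_rate(mu, sigma):
    """Crude chaos indicator under a quenched baseline input of mean mu and std sigma."""
    b = mu + sigma * rng.normal(size=N)        # baseline current, drawn once per unit
    x1 = rng.normal(0.0, 0.5, N)
    x2 = x1 + 1e-6 * rng.normal(size=N)        # slightly perturbed copy of the same state
    d0 = np.linalg.norm(x2 - x1)
    for _ in range(steps):
        x1 = x1 + dt * (-x1 + J @ np.tanh(x1) + b)
        x2 = x2 + dt * (-x2 + J @ np.tanh(x2) + b)
    d1 = max(np.linalg.norm(x2 - x1), 1e-300)  # guard against exact numerical convergence
    return np.log(d1 / d0) / (steps * dt)      # rough finite-time Lyapunov-like estimate

for mu, sigma in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    print(f"baseline mean={mu:.1f} std={sigma:.1f}  divergence rate={divergence_rate(mu, sigma):.3f}")
```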